The Impact of Artificial Intelligence on Enterprise Decision-Making Process
Górka, Ernest, Baran, Dariusz, Wojak, Gabriela, Ćwiąkała, Michał, Zupok, Sebastian, Starkowski, Dariusz, Reśko, Dariusz, Okrasa, Oliwia
Artificial intelligence improves enterprise decision-making by accelerating data analysis, reducing human error, and supporting evidence-based choices. A quantitative survey of 92 companies across multiple industries examines how AI adoption influences managerial performance, decision efficiency, and organizational barriers. Results show that 93 percent of firms use AI, primarily in customer service, data forecasting, and decision support. AI systems increase the speed and clarity of managerial decisions, yet implementation faces challenges. The most frequent barriers include employee resistance, high costs, and regulatory ambiguity. Respondents indicate that organizational factors are more significant than technological limitations. Critical competencies for successful AI use include understanding algorithmic mechanisms and change management. Technical skills such as programming play a smaller role. Employees report difficulties in adapting to AI tools, especially when formulating prompts or accepting system outputs. The study highlights the importance of integrating AI with human judgment and communication practices. When supported by adaptive leadership and transparent processes, AI adoption enhances organizational agility and strengthens decision-making performance. These findings contribute to ongoing research on how digital technologies reshape management and the evolution of hybrid human-machine decision environments.
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- North America > United States > New Jersey > Hudson County > Hoboken (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
These four charts show where AI companies could go next in the US
While the impact of AI on tech hubs like San Francisco and Boston is already being felt, AI proponents believe it will transform work everywhere, and in every industry. The report uses various proxies for what the researchers call "AI readiness" to document how unevenly this supposed transformation is taking place. Here are four charts to help understand where that could matter. Brookings divides US cities into five categories based on how ready they are to adopt AI-related industries and job offerings. To do so, it looked at local talent pool development, innovations in local institutions, and adoption potential among local companies.
- North America > United States > California > San Francisco County > San Francisco (0.29)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay (0.06)
- North America > United States > Wisconsin (0.06)
ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild
Masry, Ahmed, Thakkar, Megh, Bajaj, Aayush, Kartha, Aaryaman, Hoque, Enamul, Joty, Shafiq
Given the ubiquity of charts as a data analysis, visualization, and decision-making tool across industries and sciences, there has been a growing interest in developing pre-trained foundation models as well as general purpose instruction-tuned models for chart understanding and reasoning. However, existing methods suffer crucial drawbacks across two critical axes affecting the performance of chart representation models: they are trained on data generated from underlying data tables of the charts, ignoring the visual trends and patterns in chart images, and use weakly aligned vision-language backbone models for domain-specific training, limiting their generalizability when encountering charts in the wild. We address these important drawbacks and introduce ChartGemma, a novel chart understanding and reasoning model developed over PaliGemma. Rather than relying on underlying data tables, ChartGemma is trained on instruction-tuning data generated directly from chart images, thus capturing both high-level trends and low-level visual information from a diverse set of charts. Our simple approach achieves state-of-the-art results across 5 benchmarks spanning chart summarization, question answering, and fact-checking, and our elaborate qualitative studies on real-world charts show that ChartGemma generates more realistic and factually correct summaries compared to its contemporaries. We release the code, model checkpoints, dataset, and demos at https://github.com/vis-nlp/ChartGemma.
- North America > United States > California > San Francisco County > San Francisco (0.04)
- North America > Canada > Quebec (0.04)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Banking & Finance (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.93)
Are Large Vision Language Models up to the Challenge of Chart Comprehension and Reasoning? An Extensive Investigation into the Capabilities and Limitations of LVLMs
Islam, Mohammed Saidul, Rahman, Raian, Masry, Ahmed, Laskar, Md Tahmid Rahman, Nayeem, Mir Tafseer, Hoque, Enamul
Natural language is a powerful complementary modality of communication for data visualizations, such as bar and line charts. To facilitate chart-based reasoning using natural language, various downstream tasks have been introduced recently, such as chart question answering, chart summarization, and fact-checking with charts. These tasks pose a unique challenge, demanding both vision-language reasoning and a nuanced understanding of chart data tables, visual encodings, and natural language prompts. Despite the recent success of Large Language Models (LLMs) across diverse NLP tasks, their abilities and limitations in the realm of data visualization remain under-explored, possibly due to their lack of multi-modal capabilities. To bridge this gap, this paper presents the first comprehensive evaluation of the recently developed large vision language models (LVLMs) for chart understanding and reasoning tasks. Our evaluation assesses LVLMs, including GPT-4V and Gemini, across four major chart reasoning tasks. Furthermore, we perform a qualitative evaluation of LVLMs' performance on a diverse range of charts, aiming to provide a thorough analysis of their strengths and weaknesses. Our findings reveal that LVLMs demonstrate impressive abilities in generating fluent texts covering high-level data insights, while also encountering common problems like hallucinations, factual errors, and data bias. We highlight the key strengths and limitations of LVLMs on chart comprehension tasks, offering insights for future research.
- North America > Canada > Alberta (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Belgium (0.04)
ChartThinker: A Contextual Chain-of-Thought Approach to Optimized Chart Summarization
Liu, Mengsha, Chen, Daoyuan, Li, Yaliang, Fang, Guian, Shen, Ying
Data visualization serves as a critical means for presenting data and mining its valuable insights. The task of chart summarization, through natural language processing techniques, facilitates in-depth data analysis of charts. However, existing approaches still show notable deficiencies in visual-language matching and reasoning ability. To address these limitations, this study constructs a large-scale dataset of comprehensive chart-caption pairs and fine-tuning instructions for each chart. Thanks to the dataset's broad coverage of topics and visual styles, a better visual-language match can be achieved at the level of training data. Moreover, we propose an innovative chart summarization method, ChartThinker, which synthesizes deep analysis based on chains of thought and strategies of context retrieval, aiming to improve the logical coherence and accuracy of the generated summaries. Built upon the curated datasets, our trained model consistently exhibits superior performance in chart summarization tasks, surpassing 8 state-of-the-art models over 7 evaluation metrics. Our dataset and codes are publicly accessible.
- Asia > China > Guangdong Province > Guangzhou (0.04)
- North America > United States > Texas (0.04)
- North America > United States > Pennsylvania (0.04)
The brief history of artificial intelligence: The world has changed fast – what might be next? - Big Think
To see what the future might look like it is often helpful to study our history. This is what I will do in this article. I retrace the brief history of computers and artificial intelligence to see what we can expect for the future. How rapidly the world has changed becomes clear by how even quite recent computer technology feels ancient to us today. Mobile phones in the '90s were big bricks with tiny green displays.
The brief history of artificial intelligence: The world has changed fast – what might be next? - Our World in Data
The language and image recognition capabilities of AI systems have developed very rapidly. The chart shows how we got here by zooming into the last two decades of AI development. The plotted data stems from a number of tests in which human and AI performance were evaluated in five different domains, from handwriting recognition to language understanding. Within each of the five domains, the initial performance of the AI system is set to -100, and human performance in these tests is used as a baseline set to zero. This means that the point at which a model's performance crosses the zero line is the point at which the AI system scored higher on the relevant test than the humans who took it.
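The rescaling described above can be sketched with a simple linear map; the linear form and the function name are my own assumptions, since the text only fixes the two anchor points (initial AI performance at -100, the human baseline at 0):

```python
def rescale(score, initial_ai_score, human_score):
    """Map a raw benchmark score onto the chart's scale.

    The AI system's initial score maps to -100 and human performance
    maps to 0, so a rescaled value above zero means the AI outscored
    the human baseline on that test. Assumes a linear mapping.
    """
    return 100.0 * (score - human_score) / (human_score - initial_ai_score)


# Example: a test where the AI started at 40 points and humans score 90.
print(rescale(40.0, 40.0, 90.0))  # the AI's starting point: -100.0
print(rescale(90.0, 40.0, 90.0))  # matching human performance: 0.0
```

A raw score of 95 in this example would rescale to +10, i.e. the curve has crossed the zero line.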
Learning to simulate realistic limit order book markets from data as a World Agent
Coletta, Andrea, Moulin, Aymeric, Vyetrenko, Svitlana, Balch, Tucker
Multi-agent market simulators usually require careful calibration to emulate real markets, including the number and the type of agents. Poorly calibrated simulators can lead to misleading conclusions, potentially causing severe losses when employed by investment banks, hedge funds, and traders to study and evaluate trading strategies. In this paper, we propose a world model simulator that accurately emulates a limit order book market -- it requires no agent calibration but rather learns the simulated market behavior directly from historical data. Traditional approaches fall short in learning and calibrating the trader population, as historical labeled data with details on each individual trader's strategy is not publicly available. Our approach proposes to learn a unique "world" agent from historical data. It is intended to emulate the overall trader population, without the need to make assumptions about individual market agent strategies. We implement our world agent simulator models as a Conditional Generative Adversarial Network (CGAN), as well as a mixture of parametric distributions, and we compare our models against previous work. Qualitatively and quantitatively, we show that the proposed approaches consistently outperform previous work, providing more realism and responsiveness.
Teaching a Neural Network to Play Cards
I chose this game because the difficulty level is just about right so that it is easy to learn but it still has some twists that make it interesting. The result of the project is an Android app where you can play against AI-based opponents with adjustable difficulty level. Queen of Spades is a trick-taking card game that is related to Hearts. The game is for 4 players and is played with a deck of 32 cards. There are four suits: Hearts, Diamonds, Spades, and Clubs. Each suit has the following ranks in descending order: Ace, King, Queen, Jack, 10, 9, 8, and 7.
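The deck and rank ordering described above are easy to model in code. A minimal sketch (the function and constant names are my own, not from the app):

```python
from itertools import product

SUITS = ["Hearts", "Diamonds", "Spades", "Clubs"]
# Ranks in descending order of strength, as described for the game.
RANKS = ["Ace", "King", "Queen", "Jack", "10", "9", "8", "7"]


def build_deck():
    """Return the 32-card deck as (suit, rank) pairs: 4 suits x 8 ranks."""
    return [(suit, rank) for suit, rank in product(SUITS, RANKS)]


def beats(rank_a, rank_b):
    """True if rank_a outranks rank_b (a lower index means a stronger card)."""
    return RANKS.index(rank_a) < RANKS.index(rank_b)
```

For example, `len(build_deck())` is 32, and `beats("Ace", "7")` is true while `beats("8", "Queen")` is false.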
Machine Learning Approaches for Type 2 Diabetes Prediction and Care Management
Lim, Aloysius, Singh, Ashish, Chiam, Jody, Eckert, Carly, Kumar, Vikas, Ahmad, Muhammad Aurangzeb, Teredesai, Ankur
Prediction of diabetes and its various complications has been studied in a number of settings, but a comprehensive overview of the problem setting for diabetes prediction and care management has not been addressed in the literature. In this document we seek to remedy this omission with an encompassing overview of diabetes complication prediction, and by situating this problem in the context of real-world healthcare management. We illustrate various problems encountered in real-world clinical scenarios via our own experience with building and deploying such models. In this manuscript we present a Machine Learning (ML) framework for addressing the problem of predicting Type 2 Diabetes Mellitus (T2DM), together with a solution for risk stratification, intervention, and management. These ML models align with how physicians think about disease management and mitigation, which comprises four steps: Identify, Stratify, Engage, Measure.
- North America > United States (1.00)
- Asia (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.92)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.92)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.67)
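The Stratify step of the Identify-Stratify-Engage-Measure loop above can be sketched as a bucketing of model-predicted risk probabilities into care tiers. This is only an illustration of the idea; the cut-off values and tier names below are hypothetical, not taken from the paper:

```python
def stratify(risk_probs, low=0.2, high=0.6):
    """Assign each predicted T2DM risk probability to a care tier.

    The 0.2 / 0.6 thresholds are illustrative placeholders; in practice
    they would be chosen with clinicians and validated on held-out data.
    """
    tiers = []
    for p in risk_probs:
        if p < low:
            tiers.append("low")
        elif p < high:
            tiers.append("medium")
        else:
            tiers.append("high")
    return tiers


# Example: three patients with model-predicted risks.
print(stratify([0.1, 0.3, 0.9]))  # ['low', 'medium', 'high']
```

Each tier would then drive a different intervention intensity in the Engage step, with outcomes tracked in Measure.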